Simultaneous profiling of the spatial distributions of multiple biological molecules at single-cell resolution has recently been enabled by the development of highly multiplexed imaging technologies. Extracting and analyzing biologically relevant information contained in complex imaging data requires the use of a diverse set of computational tools and algorithms. We developed a standardized, user-friendly, customizable, and interoperable workflow for processing and analyzing data generated by highly multiplexed imaging technologies. The steinbock framework, written in Python, supports image pre-processing, segmentation, feature extraction, and data export in a reproducible fashion. The imcRtools R/Bioconductor package forms the bridge between image processing and single-cell analysis by directly importing data generated by steinbock. The package further supports spatial data analysis such as patch detection, interaction testing and spatial clustering, and integrates with tools developed within the Bioconductor project. Image visualization and segmentation quality control are performed using the cytomapper R/Bioconductor package. Together, the tools described in this workflow facilitate the analysis of multiplexed imaging raw data at the single-cell and spatial level.
To follow this tutorial, please visit
https://github.com/BodenmillerGroup/demos/tree/main/docs.
The compiled .html file of this workshop is hosted at:
https://bodenmillergroup.github.io/demos.
We will need to install the following packages for the workshop:
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install(c("imcRtools", "cytomapper", "tidyverse",
                       "ggplot2", "viridis", "pheatmap", "scales"))
To reproduce the analysis, clone the repository:
git clone https://github.com/BodenmillerGroup/demos.git
and open the Bioc2022_workshop.Rmd file in the docs folder.
Highly multiplexed imaging enables the simultaneous detection of tens of biological molecules (e.g. proteins, RNA; also referred to as “markers”) in their spatial tissue context. Recently established multiplexed imaging technologies rely on cyclic staining with immunofluorescently-tagged antibodies (Lin et al. 2018; Gut, Herrmann, and Pelkmans 2018), or the use of oligonucleotide-tagged (Saka et al. 2019; Goltsev et al. 2018) or metal-tagged antibodies (Giesen et al. 2014; Angelo et al. 2014), among others. Across technologies, the acquired data are commonly stored as multi-channel images, where each pixel encodes the abundance of all acquired markers at a specific position in the tissue. After data acquisition, bioimage processing and segmentation are conducted to extract data for downstream analysis. When performing end-to-end multiplexed image analysis, the user is often faced with a diverse set of computational tools and complex analysis scripts.
Here, we present an interoperable, modularized computational workflow to process and analyze multiplexed imaging data (Figure 1). The steinbock framework facilitates multi-channel image processing including raw data pre-processing, image segmentation and feature extraction. Data generated by steinbock can be directly read by the imcRtools R/Bioconductor package for data visualization and spatial analysis (Figure 1). The cytomapper package supports image handling as well as composite image and segmentation mask visualization.
The presented workflow is customizable, reproducible, user-friendly and integrates with a variety of downstream analysis strategies by employing standardized data formats. The tools comprised in this workflow support processing and analysis of data generated by a range of multiplexed imaging technologies. However, for demonstration purposes, we present data from Imaging Mass Cytometry (IMC), which relies on tissue staining with metal-labelled antibodies to jointly measure the spatial distribution of up to 40 proteins or RNA at 1μm resolution (Giesen et al. 2014; Schulz et al. 2018).
Figure 1: Overview of the multiplexed image processing and analysis workflow. Raw image data can be interactively visualized using napari plugins such as napari-imc for IMC, to assess data quality and for exploratory visualization. The steinbock framework performs image pre-processing, cell segmentation and single-cell data extraction using established approaches and standardized file formats. Data can be imported into R using the imcRtools package, which further supports spatial visualization and analysis. Storing the data in a SingleCellExperiment or SpatialExperiment object, imcRtools integrates with a variety of data analysis tools of the Bioconductor project such as cytomapper (Eling et al. 2020). Alternatively, steinbock exports data to the anndata format for analysis in Python, e.g. using squidpy.
The workshop is broadly structured in 4 parts:
The analysis approaches presented here were taken from the IMC data analysis book. The book provides more detailed information on the technical underpinnings of the analysis.
More information can also be found in our preprint.
We use imaging mass cytometry data to highlight the functionality of the packages. However, any imaging technology is supported as long as the data can be read into R (subject to memory and file type restrictions).
In the first part of this workshop, we will present a new framework for multiplexed image processing.
To highlight the basic steps of multiplexed image analysis, we provide example data that were acquired as part of the Integrated iMMUnoprofiling of large adaptive CANcer patient cohorts project (immucan.eu). The raw data of 4 patients can be accessed online at zenodo.org/record/5949116.
At this point, we could download the raw data to test the steinbock framework in the next section. In the interest of time, we will not download the data now and will only discuss the steinbock framework conceptually.
dir.create("../data/steinbock/raw", recursive = TRUE)
download.file("https://zenodo.org/record/5949116/files/panel.csv",
              "../data/steinbock/raw/panel.csv")
for (patient in paste0("Patient", 1:4)) {
    download.file(paste0("https://zenodo.org/record/5949116/files/", patient, ".zip"),
                  paste0("../data/steinbock/raw/", patient, ".zip"))
}
The steinbock framework offers tools for multi-channel image processing using the command-line or Python code (Windhager, Bodenmiller, and Eling 2021). Supported tasks include IMC data preprocessing, supervised multi-channel image segmentation, object quantification and data export to a variety of file formats. It further allows deep-learning enabled image segmentation. The framework is available as platform-independent Docker container, ensuring reproducibility and user-friendly installation. Read more in the Docs.
The code chunk above downloads the example data and creates the folder structure that the steinbock framework expects.
The basic input to steinbock looks as follows:
steinbock data/working directory
|
├── raw (user-provided, when starting from raw data)
├── panel.csv
├── Patient1.zip
├── Patient2.zip
├── Patient3.zip
├── Patient4.zip
The panel.csv file contains information on the antibodies/channels used in the
experiment. In the case of IMC data, this file needs to contain the entries
Metal Tag, Target (name of the antibody target), keep (which channels to
analyse) and deepcell (which channels to aggregate for deepcell segmentation).
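As a sketch, such a panel could be assembled and written out in R; the two marker rows below are hypothetical, and only the column layout reflects the description above:

```r
# Hypothetical minimal panel.csv for steinbock (IMC input)
panel <- data.frame(
  `Metal Tag` = c("In113", "Nd142"),     # isotope label of each antibody
  Target      = c("HistoneH3", "CD38"),  # name of the antibody target
  keep        = c(1, 1),                 # 1 = analyse this channel
  deepcell    = c(1, NA),                # channels aggregated for deepcell segmentation
  check.names = FALSE                    # preserve the space in "Metal Tag"
)
write.csv(panel, "panel.csv", row.names = FALSE)
```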
Each .zip file contains IMC raw data of one slide. Multiple
acquisitions/images are present in each file.
The following code chunk displays a bash script to run the steinbock framework
from IMC raw data via image segmentation to single-cell data export.
#!/usr/bin/env bash
alias steinbock="docker run -v path/to/demos/data/steinbock:/data -u $(id -u):$(id -g) ghcr.io/bodenmillergroup/steinbock:0.14.2"
# panel pre-processing
steinbock preprocess imc panel --namecol Clean_Target
# file type conversion and filtering
steinbock preprocess imc images --hpf 50
# deep learning-based segmentation
steinbock segment deepcell --minmax
# measurement
steinbock measure intensities
steinbock measure regionprops
steinbock measure neighbors --type expansion --dmax 4
The steinbock preprocess imc panel call reads in an unformatted
panel and creates a standardized panel format.
The steinbock preprocess imc images call reads in the IMC raw data,
performs a “hot pixel filtering” and writes out one .tiff file
per acquisition.
The steinbock segment deepcell --minmax uses a pre-trained neural network
to perform single-cell segmentation of the images.
Finally, the steinbock measure intensities, steinbock measure regionprops
and steinbock measure neighbors --type expansion --dmax 4 calls extract the
mean pixel intensities per cell and channel, the morphological features
of the cells, and the spatial cell neighbor graphs.
The final folder structure looks as follows:
steinbock data/working directory
|
├── raw (user-provided, when starting from raw data)
|
├── img (user-provided, when not starting from raw data)
├── panel.csv (user-provided, when not starting from raw data)
├── images.csv
|
├── masks
|
├── intensities
├── regionprops
└── neighbors
For easy access, we can now download the already pre-processed data:
# download the pre-processed data (intensities, region properties,
# neighbor graphs, images and segmentation masks)
for (item in c("intensities", "regionprops", "neighbors", "img", "masks_deepcell")) {
    url <- paste0("https://zenodo.org/record/6642699/files/", item, ".zip")
    destfile <- paste0("../data/steinbock/", item, ".zip")
    download.file(url, destfile)
    unzip(destfile, exdir = "../data/steinbock", overwrite = TRUE)
    unlink(destfile)
}
# download individual files
download.file("https://zenodo.org/record/6642699/files/panel.csv",
"../data/steinbock/panel.csv")
download.file("https://zenodo.org/record/6642699/files/images.csv",
"../data/steinbock/images.csv")
For the steps after image processing, we developed R/Bioconductor packages that read in the processed single-cell data, the images and the segmentation masks.
The imcRtools package supports the handling and analysis of imaging mass cytometry and other highly multiplexed imaging data. The main functionalities include reading in single-cell data after image segmentation and measurement, data formatting to perform channel spillover correction and a number of spatial analysis approaches.
In the first instance, we can read in the single-cell data as processed using
steinbock by calling the read_steinbock function:
library(imcRtools)
spe <- read_steinbock("../data/steinbock/")
spe
## class: SpatialExperiment
## dim: 40 46917
## metadata(0):
## assays(1): counts
## rownames(40): MPO HistoneH3 ... DNA1 DNA2
## rowData names(11): channel name ... Final.Concentration...Dilution
## uL.to.add
## colnames: NULL
## colData names(8): sample_id ObjectNumber ... width_px height_px
## reducedDimNames(0):
## mainExpName: NULL
## altExpNames(0):
## spatialCoords names(2) : Pos_X Pos_Y
## imgData names(1): sample_id
By default, single-cell data is read in as SpatialExperiment object.
The summarized pixel intensities per channel and cell (here mean intensity) are
stored in the counts slot. Columns represent cells and rows represent channels.
counts(spe)[1:5,1:5]
## [,1] [,2] [,3] [,4] [,5]
## MPO 0.6273888 0.4500000 0.5286462 1.019142 0.4000000
## HistoneH3 3.4116090 13.0472305 2.5331530 9.596759 2.8927974
## SMA 0.2837388 2.0064459 0.1631139 1.455513 0.2264860
## CD16 2.1288451 2.7879388 2.1463270 18.429319 0.8185134
## CD38 0.2760149 0.7482431 1.1644259 2.324016 0.5419144
Metadata associated to individual cells are stored in the colData slot. After
initial image processing, these metadata include the numeric identifier (ObjectNumber),
the area, and morphological features of each cell. In addition, sample_id stores
the image name from which each cell was extracted and the width and height of the
corresponding images are stored.
head(colData(spe))
## DataFrame with 6 rows and 8 columns
## sample_id ObjectNumber area major_axis_length minor_axis_length
## <character> <numeric> <numeric> <numeric> <numeric>
## 1 Patient1_001 1 11 7.00009 1.90442
## 2 Patient1_001 2 20 14.05565 1.95929
## 3 Patient1_001 3 16 9.16515 2.00000
## 4 Patient1_001 4 19 7.68906 3.08022
## 5 Patient1_001 5 10 6.00000 1.95959
## 6 Patient1_001 6 20 8.06811 3.14732
## eccentricity width_px height_px
## <numeric> <numeric> <numeric>
## 1 0.962281 600 600
## 2 0.990237 600 600
## 3 0.975900 600 600
## 4 0.916254 600 600
## 5 0.945163 600 600
## 6 0.920775 600 600
The read_steinbock function can also read in the single-cell data as
SingleCellExperiment object.
The main difference between the SpatialExperiment and the
SingleCellExperiment data container in the current setting is the way spatial
locations of all cells are stored. For the SingleCellExperiment container, the
locations are stored in the colData slot while the SpatialExperiment
container stores them in the spatialCoords slot:
head(spatialCoords(spe))
## Pos_X Pos_Y
## 1 468.8182 0.3636364
## 2 516.4500 0.4000000
## 3 587.5000 0.5000000
## 4 192.2632 0.8947368
## 5 231.5000 0.4000000
## 6 270.8000 0.8500000
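If a SingleCellExperiment is preferred, the return_as argument of read_steinbock can be set accordingly; the following is a sketch using the same path as above:

```r
library(imcRtools)
# Read the steinbock output as a SingleCellExperiment instead of a SpatialExperiment
sce <- read_steinbock("../data/steinbock/", return_as = "sce")
# The cell locations now live in colData rather than in spatialCoords
head(colData(sce)[, c("Pos_X", "Pos_Y")])
```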
The spatial object graphs generated by steinbock are read into a colPair
slot of the SpatialExperiment (or SingleCellExperiment) object. Cell-cell
interactions (cells in close spatial proximity) are represented as “edge list”
(stored as SelfHits object). Here, the left side represents the column indices
of the “from” cells and the right side represents the column indices of the “to”
cells. In the last part of this workflow, we will highlight the visualization of
the spatial object graphs.
colPair(spe, "neighborhood")
## SelfHits object with 247098 hits and 0 metadata columns:
## from to
## <integer> <integer>
## [1] 1 27
## [2] 1 54
## [3] 2 10
## [4] 2 43
## [5] 3 16
## ... ... ...
## [247094] 46916 46894
## [247095] 46917 46854
## [247096] 46917 46879
## [247097] 46917 46888
## [247098] 46917 46912
## -------
## nnode: 46917
Finally, metadata regarding the channels are stored in the rowData slot. This
information is extracted from the panel.csv file. Channels are ordered by
isotope mass and therefore match the channel order of the multi-channel images.
head(rowData(spe))
## DataFrame with 6 rows and 11 columns
## channel name keep ilastik deepcell Tube.Number
## <character> <character> <numeric> <numeric> <numeric> <numeric>
## MPO Y89 MPO 1 NA NA 2101
## HistoneH3 In113 HistoneH3 1 1 1 2113
## SMA In115 SMA 1 NA NA 1914
## CD16 Pr141 CD16 1 NA NA 2079
## CD38 Nd142 CD38 1 NA NA 2095
## HLADR Nd143 HLADR 1 NA NA 2087
## Target Antibody.Clone Stock.Concentration
## <character> <character> <numeric>
## MPO Myeloperoxidase MPO Polyclonal MPO 500
## HistoneH3 Histone H3 D1H2 500
## SMA SMA 1A4 500
## CD16 CD16 EPR16784 500
## CD38 CD38 EPR4106 500
## HLADR HLA-DR TAL 1B5 500
## Final.Concentration...Dilution uL.to.add
## <character> <character>
## MPO 4 ug/mL 0.8
## HistoneH3 1 ug/mL 0.2
## SMA 0.25 ug/mL 0.05
## CD16 5 ug/mL 1
## CD38 2.5 ug/mL 0.5
## HLADR 1 ug/mL 0.2
We already provide a SpatialExperiment object that contains the cell phenotype
information obtained by following the
IMC data analysis book.
download.file("https://zenodo.org/record/6810879/files/spe.rds",
"../data/spe.rds")
(spe <- readRDS("../data/spe.rds"))
## class: SpatialExperiment
## dim: 40 46825
## metadata(4): color_vectors cluster_codes SOM_codes delta_area
## assays(2): counts exprs
## rownames(40): MPO HistoneH3 ... DNA1 DNA2
## rowData names(15): channel name ... marker_class used_for_clustering
## colnames(46825): Patient1_001_1 Patient1_001_2 ... Patient4_008_2772
## Patient4_008_2773
## colData names(20): sample_id ObjectNumber ... cell_labels celltype
## reducedDimNames(8): UMAP TSNE ... seurat UMAP_seurat
## mainExpName: NULL
## altExpNames(0):
## spatialCoords names(2) : Pos_X Pos_Y
## imgData names(1): sample_id
We developed the cytomapper package for handling and visualization of multi-channel images and segmentation masks. The main functions of this package allow (1) reading in multi-channel images and segmentation masks, (2) visualization of pixel-level information across multiple channels, (3) display of cell-level information (expression and/or metadata) on segmentation masks, and (4) gating and visualization of single cells.
The loadImages function is used to read in processed multi-channel images and
their corresponding segmentation masks. Of note, the multi-channel images
generated by steinbock are saved as 32-bit images while the segmentation masks
are saved as 16-bit images. To correctly scale pixel values of the segmentation
masks when reading them in, set as.is = TRUE.
library(cytomapper)
images <- loadImages("../data/steinbock/img/")
masks <- loadImages("../data/steinbock/masks_deepcell/", as.is = TRUE)
In the case of multi-channel images, it is beneficial to set the channelNames
for easy visualization. Using the steinbock framework, the channel order of
the single-cell data matches the channel order of the multi-channel images.
However, it is recommended to make sure that the channel order is identical
between the single-cell data and the images.
channelNames(images) <- rownames(spe)
images
## CytoImageList containing 14 image(s)
## names(14): Patient1_001 Patient1_002 Patient1_003 Patient2_001 Patient2_002 Patient2_003 Patient2_004 Patient3_001 Patient3_002 Patient3_003 Patient4_005 Patient4_006 Patient4_007 Patient4_008
## Each image contains 40 channel(s)
## channelNames(40): MPO HistoneH3 SMA CD16 CD38 HLADR CD27 CD15 CD45RA CD163 B2M CD20 CD68 Ido1 CD3 LAG3 / LAG33 CD11c PD1 PDGFRb CD7 GrzB PDL1 TCF7 CD45RO FOXP3 ICOS CD8a CarbonicAnhydrase CD33 Ki67 VISTA CD40 CD4 CD14 Ecad CD303 CD206 cleavedPARP DNA1 DNA2
For further visualization we will need to add additional metadata to the
elementMetadata slot of the CytoImageList objects. This slot is easily
accessible using the mcols function.
Here, we will save the sample_id information within the elementMetadata slot
of the multi-channel images and segmentation masks objects. It is crucial that
the order of the images in both CytoImageList objects is the same.
library(tidyverse)
all.equal(names(images), names(masks))
## [1] TRUE
mcols(images) <- mcols(masks) <- DataFrame(sample_id = names(images))
An alternative way of generating a SingleCellExperiment object directly
from the multi-channel images and segmentation masks is supported by the
measureObjects
function of the cytomapper package. For each cell present in the masks
object, the function computes the mean pixel intensity per channel as well as
morphological features (area, radius, major axis length, eccentricity) and the
location of cells:
cytomapper_sce <- measureObjects(masks, image = images, img_id = "sample_id")
cytomapper_sce
## class: SingleCellExperiment
## dim: 40 46917
## metadata(0):
## assays(1): counts
## rownames(40): MPO HistoneH3 ... DNA1 DNA2
## rowData names(0):
## colnames: NULL
## colData names(8): sample_id object_id ... m.majoraxis m.eccentricity
## reducedDimNames(0):
## mainExpName: NULL
## altExpNames(0):
In the next section, we will highlight the use of the cytomapper
package to visualize multi-channel images and segmentation masks.
For convenience, we will select three example images and their corresponding segmentation masks.
# Sample images
set.seed(220517)
cur_id <- sample(unique(spe$sample_id), 3)
cur_images <- images[names(images) %in% cur_id]
cur_masks <- masks[names(masks) %in% cur_id]
The following section gives examples for visualizing multiple channels as
pseudo-color composite images. For this the cytomapper package exports the
plotPixels function which expects a CytoImageList object storing one or
multiple multi-channel images.
The following example highlights the visualization of six markers at once per image (the maximum number of markers allowed). The markers indicate the spatial distribution of tumor cells (E-cadherin), T cells (CD3), B cells (CD20), CD8+ T cells (CD8a), plasma cells (CD38) and proliferating cells (Ki67).
plotPixels(cur_images,
colour_by = c("Ecad", "CD3", "CD20", "CD8a", "CD38", "Ki67"),
bcg = list(Ecad = c(0, 5, 1),
CD3 = c(0, 5, 1),
CD20 = c(0, 5, 1),
CD8a = c(0, 5, 1),
CD38 = c(0, 8, 1),
Ki67 = c(0, 5, 1)))
In the following section, we show examples of how to visualize single cells as segmentation masks. This type of visualization allows observing the spatial distribution of cell phenotypes, visually assessing morphological features, and performing quality control of cell segmentation and phenotyping.
The cytomapper package provides the plotCells function that accepts a
CytoImageList object containing segmentation masks. These are defined as
single channel images where sets of pixels with the same integer ID identify
individual cells. This integer ID can be found as an entry in the colData(spe)
slot and as pixel information in the segmentation masks. The entry in
colData(spe) needs to be specified via the cell_id argument to the
plotCells function. In that way, data contained in the SpatialExperiment
object can be mapped to segmentation masks. For the current dataset, the cell
IDs are stored in colData(spe)$ObjectNumber.
As cell IDs are only unique within a single image, plotCells also requires
the img_id argument. This argument specifies the colData(spe) as well as the
mcols(masks) entry that stores the unique image name from which each cell was
extracted. In the current dataset the unique image names are stored in
colData(spe)$sample_id and mcols(masks)$sample_id.
With these two entries providing the mapping between the SpatialExperiment
object and the segmentation masks, we can now color individual cells based on
their cell type:
plotCells(cur_masks,
object = spe,
cell_id = "ObjectNumber", img_id = "sample_id",
colour_by = "celltype")
For consistent visualization, the plotCells function takes a named list as its
colour argument. The entry name must match the colour_by argument.
plotCells(cur_masks,
object = spe,
cell_id = "ObjectNumber", img_id = "sample_id",
colour_by = "celltype",
colour = list(celltype = metadata(spe)$color_vectors$celltype))
If only individual cell types should be visualized, the SpatialExperiment
object can be subsetted (e.g., to only contain CD8+ T cells). In the following
example, CD8+ T cells are colored in red and all cells not contained in the
subsetted object are colored in white (as set by the missing_colour
argument).
CD8 <- spe[,spe$celltype == "CD8"]
plotCells(cur_masks,
object = CD8,
cell_id = "ObjectNumber", img_id = "sample_id",
colour_by = "celltype",
colour = list(celltype = c(CD8 = "red")),
missing_colour = "white")
The imcRtools package contains a number of spatial analysis approaches. First,
cell-cell interactions are detected via spatial graph construction; these graphs
can be visualized with cells representing nodes and interactions representing
edges. Furthermore, per cell, its direct neighbours are summarized to allow
spatial clustering. Per image/grouping level, interactions between types of
cells are counted, averaged and compared against random permutations. In that
way, types of cells that interact more (attraction) or less (avoidance)
frequently than expected by chance are detected.
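In imcRtools, these two steps are available via the aggregateNeighbors function (neighborhood summaries for spatial clustering) and the testInteractions function (permutation-based interaction testing). The following sketch assumes the spe object and the steinbock "neighborhood" graph from above; argument values are illustrative:

```r
library(imcRtools)
# Summarize, per cell, the cell types of its direct neighbors; the
# resulting counts can be clustered to define spatial communities
spe <- aggregateNeighbors(spe, colPairName = "neighborhood",
                          aggregate_by = "metadata", count_by = "celltype")
head(spe$aggregatedNeighbors)

# Count celltype-to-celltype interactions per image and compare the
# observed counts against permutations of the celltype labels
out <- testInteractions(spe,
                        group_by = "sample_id",
                        label = "celltype",
                        colPairName = "neighborhood")
head(out)
```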
Many spatial analysis approaches either compare the observed versus expected number of cells around a given cell type (point process) or utilize interaction graphs (spatial object graphs) to estimate clustering or interaction frequencies between cell types.
The steinbock framework allows the construction of these spatial graphs. During image processing, we have constructed a spatial graph by expanding the individual cell masks by 4 pixels.
The imcRtools package further allows the ad hoc construction of spatial
graphs directly from a SpatialExperiment or SingleCellExperiment object,
using the spatial locations (centroids) of individual cells. The
buildSpatialGraph
function allows constructing spatial graphs by detecting the k-nearest neighbors
in 2D (knn), by detecting all cells within a given distance to the center cell
(expansion) and by Delaunay triangulation (delaunay).
When constructing a knn graph, the number of neighbors (k) needs to be set and
(optionally) the maximum distance to consider (max_dist) can be specified.
When constructing a graph via expansion, the distance to expand (threshold)
needs to be provided. For graphs constructed via Delaunay triangulation,
the max_dist parameter can be set to avoid unusually large connections at the
edge of the image.
spe <- buildSpatialGraph(spe, img_id = "sample_id", type = "knn", k = 20)
spe <- buildSpatialGraph(spe, img_id = "sample_id", type = "expansion", threshold = 20)
spe <- buildSpatialGraph(spe, img_id = "sample_id", type = "delaunay", max_dist = 50)
The spatial graphs are stored in colPair(spe, name) slots. These slots store
SelfHits objects representing edge lists, in which the first column indicates
the index of the “from” cell and the second column the index of the “to” cell.
Each edge list is automatically subsetted and re-indexed when subsetting the object.
colPairNames(spe)
## [1] "neighborhood" "knn_interaction_graph"
## [3] "expansion_interaction_graph" "delaunay_interaction_graph"
Here, colPair(spe, "neighborhood") stores the spatial graph constructed by
steinbock, colPair(spe, "knn_interaction_graph") stores the knn spatial
graph, colPair(spe, "expansion_interaction_graph") stores the expansion graph
and colPair(spe, "delaunay_interaction_graph") stores the graph constructed by
Delaunay triangulation.
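Because the stored edge lists are kept in sync with the object, restricting a graph to a single image is a matter of subsetting; a short sketch:

```r
# Subset to one image; all stored colPair edge lists are re-indexed accordingly
cur_spe <- spe[, spe$sample_id == "Patient1_001"]
colPair(cur_spe, "knn_interaction_graph")
```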
The previous section highlights the use of the cytomapper package to visualize multi-channel images and segmentation masks. Here, we introduce the plotSpatial function of the imcRtools package to visualize the cells’ centroids and cell-cell interactions as spatial graphs.
In the following example, we select one image for visualization purposes.
Here, each dot (node) represents a cell and edges are drawn between cells
in close physical proximity as detected by steinbock or the buildSpatialGraph
function. Nodes are variably colored based on the cell type and edges are
colored in grey.
library(ggplot2)
library(viridis)
# steinbock interaction graph
plotSpatial(spe[,spe$sample_id == "Patient3_001"],
node_color_by = "celltype",
img_id = "sample_id",
draw_edges = TRUE,
colPairName = "neighborhood",
nodes_first = FALSE,
edge_color_fix = "grey") +
scale_color_manual(values = metadata(spe)$color_vectors$celltype) +
ggtitle("steinbock interaction graph")
# knn interaction graph
plotSpatial(spe[,spe$sample_id == "Patient3_001"],
node_color_by = "celltype",
img_id = "sample_id",
draw_edges = TRUE,
colPairName = "knn_interaction_graph",
nodes_first = FALSE,
edge_color_fix = "grey") +
scale_color_manual(values = metadata(spe)$color_vectors$celltype) +
ggtitle("knn interaction graph")
# expansion interaction graph
plotSpatial(spe[,spe$sample_id == "Patient3_001"],
node_color_by = "celltype",
img_id = "sample_id",
draw_edges = TRUE,
colPairName = "expansion_interaction_graph",
nodes_first = FALSE,
directed = FALSE,
edge_color_fix = "grey") +
scale_color_manual(values = metadata(spe)$color_vectors$celltype) +
ggtitle("expansion interaction graph")
# delaunay interaction graph
plotSpatial(spe[,spe$sample_id == "Patient3_001"],
node_color_by = "celltype",
img_id = "sample_id",
draw_edges = TRUE,
colPairName = "delaunay_interaction_graph",
nodes_first = FALSE,
edge_color_fix = "grey") +
scale_color_manual(values = metadata(spe)$color_vectors$celltype) +
ggtitle("delaunay interaction graph")
Finally, the plotSpatial function allows displaying all images at once. This
visualization can be useful to quickly detect larger structures of interest.
plotSpatial(spe,
node_color_by = "celltype",
img_id = "sample_id",
node_size_fix = 0.5) +
scale_color_manual(values = metadata(spe)$color_vectors$celltype)